On the convergence of two sequential Monte Carlo methods for maximum a posteriori sequence estimation and stochastic global optimization

Authors

  • Joaquín Míguez
  • Dan Crisan
  • Petar M. Djuric
Abstract

This paper addresses the problem of maximum a posteriori (MAP) sequence estimation in general state-space models. We consider two algorithms based on the sequential Monte Carlo (SMC) methodology (also known as particle filtering). We prove that they produce approximations of the MAP estimator and that they converge almost surely. We also derive a lower bound for the number of particles that are needed to achieve a given approximation accuracy. In the last part of the paper, we investigate the application of particle filtering and MAP estimation to the global optimization of a class of (possibly non-convex and possibly nondifferentiable) cost functions. In particular, we show how to convert the cost-minimization problem into one of MAP sequence estimation for a state-space model that is “matched” to the cost of interest. We provide examples that illustrate the application of the methodology as well as numerical results.
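The conversion of cost minimization into MAP sequence estimation can be illustrated with a minimal sketch (an assumption for illustration, not the paper's actual algorithm): a particle cloud explores the search space, particles are weighted by a pseudo-likelihood exp(-beta * cost) that is "matched" to the cost of interest, and resampling concentrates the cloud around low-cost regions. The name `smc_minimize` and all tuning constants are hypothetical.

```python
import numpy as np

def smc_minimize(cost, n_particles=500, n_steps=50, sigma=0.5, beta=5.0, seed=0):
    """SMC-style stochastic minimizer sketch: propagate a particle cloud with
    random moves, weight by a pseudo-likelihood exp(-beta * cost), and
    resample so the cloud concentrates around low-cost regions."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=n_particles)          # initial particle cloud
    best_x, best_c = 0.0, np.inf
    for _ in range(n_steps):
        x = x + sigma * rng.standard_normal(n_particles)  # exploration move
        c = cost(x)
        i = int(np.argmin(c))
        if c[i] < best_c:                                 # track best point seen
            best_x, best_c = float(x[i]), float(c[i])
        w = np.exp(-beta * (c - c.min()))                 # stabilized weights
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)          # multinomial resampling
        sigma *= 0.95                                     # anneal the step size
    return best_x, best_c
```

On a non-convex, possibly nondifferentiable cost such as (x² − 1)², the sketch recovers one of the two global minima at x = ±1; no gradient information is used at any point.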

Similar articles

Particle-kernel estimation of the filter density in state-space models

Sequential Monte Carlo (SMC) methods, also known as particle filters, are simulation-based recursive algorithms for the approximation of the a posteriori probability measures generated by state-space dynamical models. At any given time t, a SMC method produces a set of samples over the state space of the system of interest (often termed “particles”) that is used to build a discrete and random a...
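The propagate-weight-resample recursion described above can be sketched as a minimal bootstrap particle filter for a toy scalar linear-Gaussian model; the model coefficients and the helper name `bootstrap_pf` are assumptions for the sketch, not taken from the cited paper.

```python
import numpy as np

def bootstrap_pf(ys, n_particles=1000, seed=0):
    """Bootstrap particle filter sketch for the toy scalar model
        x_t = 0.9 x_{t-1} + v_t,  v_t ~ N(0, 1)      (transition)
        y_t = x_t + e_t,          e_t ~ N(0, 0.25)   (observation)
    Returns the filtered means E[x_t | y_{1:t}]."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)                  # sample from the prior
    means = []
    for y in ys:
        x = 0.9 * x + rng.standard_normal(n_particles)    # propagate the dynamics
        w = np.exp(-0.5 * ((y - x) / 0.5) ** 2)           # observation likelihood
        w /= w.sum()
        means.append(float(np.sum(w * x)))                # weighted posterior mean
        x = rng.choice(x, size=n_particles, p=w)          # multinomial resampling
    return means
```

The weighted particles form exactly the discrete random measure mentioned above; the weighted mean is one functional of that measure, and resampling resets the weights to uniform before the next step.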

Learning Sigmoid Belief Networks via Monte Carlo Expectation Maximization

Belief networks are commonly used generative models of data, but require expensive posterior estimation to train and test the model. Learning typically proceeds by posterior sampling, variational approximations, or recognition networks, combined with stochastic optimization. We propose using an online Monte Carlo expectation-maximization (MCEM) algorithm to learn the maximum a posteriori (MAP) e...

On Rates of Convergence for Stochastic Optimization Problems Under Non--Independent and Identically Distributed Sampling

In this paper we discuss the issue of solving stochastic optimization problems by means of sample average approximations. Our focus is on rates of convergence of estimators of optimal solutions and optimal values with respect to the sample size. This is a well-studied problem in case the samples are independent and identically distributed (i.e., when standard Monte Carlo simulation is used); he...
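The sample average approximation idea can be sketched on a toy problem with a closed-form SAA solution: minimizing the empirical average of (x − zᵢ)² over i.i.d. standard-normal draws yields the sample mean, which converges to the true minimizer x* = 0 (with optimal value 1) at the standard Monte Carlo rate. The name `saa_estimate` is an assumption for the sketch.

```python
import numpy as np

def saa_estimate(n, seed=0):
    """Sample average approximation sketch for min_x E[(x - Z)^2], Z ~ N(0, 1).
    The SAA minimizer is the sample mean of the draws; the true optimum
    is x* = 0 with optimal value 1."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    x_hat = float(z.mean())                    # argmin of the sample average
    v_hat = float(np.mean((x_hat - z) ** 2))   # estimated optimal value
    return x_hat, v_hat
```

With i.i.d. sampling the estimation error of both the minimizer and the optimal value shrinks like n^{-1/2}; the cited paper's question is what happens to such rates when the samples are not i.i.d.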

Maximum a posteriori voice conversion using sequential Monte Carlo methods

Many voice conversion algorithms are based on frame-wise mapping from source features into target features. This ignores the inherent temporal continuity that is present in speech and can degrade the subjective quality. In this paper, we propose to optimize the speech feature sequence after a frame-based conversion algorithm has been applied. In particular, we select the sequence of speech feat...

On Computational Methods for Nonlinear Estimation

The Bayesian approach provides a rather powerful framework for handling nonlinear, as well as linear, estimation problems. We can in fact pose a general solution to the nonlinear estimation problem. However, in the general case there does not exist any closed-form solution and we are forced to use approximate techniques. In this thesis we will study one such technique, the sequential Monte Carl...

Journal:
  • Statistics and Computing

Volume 23, Issue

Pages  -

Publication date: 2013